Big data. It’s the buzzword. It’s something we recognize as important to driving, and creating trust in, our decision-making process. We generate data quickly and in significant amounts, but those data are complicated, and quickly become overwhelming. In 2017, the Harvard Business Review reported that:
As a team that uses “all readily available and credible data” throughout the state, navigating and solving the challenges of working with big datasets from disparate sources is something we do continuously. Through the Department’s first continuous improvement efforts, the Integrated Report (IR) Programming Team re-evaluated and re-imagined how we use technology to visualize big data and overcome its challenges. We set out to become more efficient, agile, and credible in our decision-making processes when reporting on the quality of the state’s surface waters. This summary discusses the challenges and successes of the IR team’s continuous improvement efforts over the last three years.
Challenges:
>1,000 unique criteria
Multiple data sources:
1. Internal data
2. Partner agencies
3. Stakeholders
Interpretation and analysis issues:
1. Field & lab methods
2. Parameters
3. Site types
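Data arriving from internal programs, partner agencies, and stakeholders rarely share one vocabulary: the same parameter can be reported under different names, units, and method codes. A minimal sketch of the kind of harmonization step this requires is below; the parameter synonyms, unit factors, and site ID are illustrative assumptions, not the IR team's actual lookup tables.

```python
# Sketch: harmonizing water quality records from multiple sources.
# The synonym map, unit factors, and example records are illustrative
# assumptions, not the IR team's actual vocabulary.

# Map each source's parameter spelling to one canonical name.
PARAMETER_SYNONYMS = {
    "TP": "Total Phosphorus",
    "Phosphorus, total": "Total Phosphorus",
    "Total Phosphorus": "Total Phosphorus",
}

# Convert every result to a single canonical unit (mg/L here).
UNIT_TO_MG_L = {
    "mg/L": 1.0,
    "ug/L": 0.001,
}

def harmonize(record):
    """Return a copy of the record with a canonical parameter name
    and unit, or None if the parameter/unit is unrecognized."""
    name = PARAMETER_SYNONYMS.get(record["parameter"])
    factor = UNIT_TO_MG_L.get(record["unit"])
    if name is None or factor is None:
        return None  # flag for manual review instead of guessing
    return {
        "site": record["site"],
        "parameter": name,
        "value": record["value"] * factor,
        "unit": "mg/L",
    }

# Records as they might arrive from two different sources:
raw = [
    {"site": "4900440", "parameter": "TP",
     "value": 25.0, "unit": "ug/L"},
    {"site": "4900440", "parameter": "Phosphorus, total",
     "value": 0.03, "unit": "mg/L"},
]
clean = [harmonize(r) for r in raw]
```

Unrecognized parameters or units are deliberately returned as `None` rather than passed through, so that anything outside the controlled vocabulary surfaces for review instead of silently contaminating an assessment.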
Previous approaches:
1. Native web format of Utah water quality criteria.
2. Flattened database format of Utah water quality criteria.
3. Text-format geographic beneficial-use descriptions and site-specific criteria, which needed to be digitized into geographic polygons so that uses and criteria could be assigned automatically to water quality monitoring locations.
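Once the use descriptions are digitized into polygons, assigning beneficial uses to a monitoring location reduces to a point-in-polygon test. A minimal sketch using the standard ray-casting algorithm is below; the polygon coordinates and use names are hypothetical placeholders, not actual Utah designations, and a production workflow would use a GIS library rather than hand-rolled geometry.

```python
# Sketch: assigning beneficial-use polygons to monitoring locations
# with a ray-casting point-in-polygon test. Coordinates and use
# names are illustrative, not actual Utah designations.

def point_in_polygon(x, y, polygon):
    """Ray casting: count edge crossings of a horizontal ray from (x, y).
    An odd crossing count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y-level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical beneficial-use polygons (projected coordinates):
use_polygons = {
    "Recreation": [(0, 0), (10, 0), (10, 10), (0, 10)],
    "Cold-water aquatic life": [(10, 0), (20, 0), (20, 10), (10, 10)],
}

def assign_uses(site_x, site_y):
    """Return every beneficial use whose polygon contains the site."""
    return [use for use, poly in use_polygons.items()
            if point_in_polygon(site_x, site_y, poly)]
```

With polygons in place, every monitoring location inherits its uses (and therefore its criteria) from geometry alone, replacing the manual lookup against text descriptions.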